In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: i) supervised classification of labeled source data, and ii) unsupervised reconstruction of unlabeled target data. In this way, the learnt representation not only preserves discriminability, but also encodes useful information from the target domain. Our new DRCN model can be optimized by backpropagation, similarly to standard neural networks. We evaluate the performance of DRCN on a series of cross-domain object recognition tasks, where DRCN provides a considerable improvement (up to ~8% in accuracy) over the prior state-of-the-art algorithms. Interestingly, we also observe that the reconstruction pipeline of DRCN transforms images from the source domain into images whose appearance resembles the target dataset. This suggests that DRCN's performance is due to constructing a single composite representation that encodes information about both the structure of target images and the classification of source images. Finally, we provide a formal analysis to justify the algorithm's objective in the domain adaptation context.
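
The abstract does not spell out the training objective; as a reading aid, a minimal sketch of one plausible joint objective, assuming an encoder $f_{enc}$ shared by a label predictor $f_{lab}$ and a decoder $f_{dec}$, with a trade-off weight $\lambda$ (all symbols here are our own notation, not taken from the paper), is:

$$
\min_{\Theta_{enc},\,\Theta_{lab},\,\Theta_{dec}} \;
\lambda \, \mathcal{L}_{c}\!\Big( f_{lab}\big(f_{enc}(x^{s})\big),\, y^{s} \Big)
\;+\; (1-\lambda)\, \mathcal{L}_{r}\!\Big( f_{dec}\big(f_{enc}(x^{t})\big),\, x^{t} \Big),
\qquad \lambda \in [0,1],
$$

where $\mathcal{L}_{c}$ is a classification loss (e.g., cross-entropy) on labeled source pairs $(x^{s}, y^{s})$ and $\mathcal{L}_{r}$ is a reconstruction loss (e.g., squared error) on unlabeled target inputs $x^{t}$. Because both terms are differentiable, the whole model can be trained end-to-end with backpropagation, as the abstract states.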